Collaborative AI systems (CAISs) aim at working together with humans in a shared space to achieve a common goal. This critical setting yields hazardous circumstances that could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards and regulations is of the greatest importance. Only a few works of limited scale have been reported so far, since much work remains to manage the possible risks. We identify emerging problems in this context, and then report our vision, as well as the progress of our multidisciplinary research team, composed of software/systems and mechatronics engineers, towards developing a risk-driven assurance process for CAISs.
The minimum sum-of-squares clustering (MSSC), or k-means type clustering, has traditionally been considered an unsupervised learning task. In recent years, the use of background knowledge to improve cluster quality and promote the interpretability of the clustering process has become a hot research topic at the intersection of mathematical optimization and machine learning. The problem of taking advantage of background information in data clustering is called semi-supervised or constrained clustering. In this paper, we present a new branch-and-bound algorithm for semi-supervised MSSC, where background knowledge is incorporated as pairwise must-link and cannot-link constraints. For the lower bound, we solve a semidefinite programming relaxation of the MSSC discrete optimization model and use a cutting-plane procedure to strengthen the bound. For the upper bound, instead, by using integer programming tools, we propose an adaptation of the k-means algorithm to the constrained case. For the first time, the proposed global optimization algorithm efficiently manages to solve real-world instances of up to 800 data points with different combinations of must-link and cannot-link constraints and a generic number of features. This problem size is about four times larger than that of the instances solved by state-of-the-art exact algorithms.
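For intuition on the constrained setting, a minimal sketch of k-means adapted to must-link/cannot-link constraints in the spirit of the classic COP-KMeans heuristic, not the paper's branch-and-bound or integer-programming-based procedure: points are greedily assigned to the nearest cluster that violates no pairwise constraint, with a fall-back when no feasible cluster exists.

```python
import numpy as np

def _violates(i, c, assign, must_link, cannot_link):
    """Check whether assigning point i to cluster c breaks any pairwise constraint."""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and j in assign and assign[j] != c:
            return True  # must-link partner already placed elsewhere
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and assign.get(j) == c:
            return True  # cannot-link partner already in this cluster
    return False

def cop_kmeans(X, k, must_link=(), cannot_link=(), n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        assign = {}
        for i in range(len(X)):
            # clusters ordered by distance; pick the nearest feasible one
            order = np.argsort(np.linalg.norm(centers - X[i], axis=1))
            feasible = [c for c in order
                        if not _violates(i, int(c), assign, must_link, cannot_link)]
            assign[i] = int(feasible[0]) if feasible else int(order[0])
        labels = np.array([assign[i] for i in range(len(X))])
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```

This greedy assignment is order-dependent and offers no optimality guarantee, which is precisely the gap the exact algorithm in the abstract addresses.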
Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process governing GCN's training-inference process. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task.
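The widely used homophily ratio that the abstract critiques is straightforward to compute; a minimal sketch of the edge-homophily version (the 2NCS metric itself is defined in the paper, so only this baseline is shown here):

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a class label.

    edges: (E, 2) array of node-index pairs; labels: (N,) class labels.
    """
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    return float(np.mean(labels[edges[:, 0]] == labels[edges[:, 1]]))
```

A ratio near 1 indicates a homophilous graph; heterophilous graphs sit near 0, but as the abstract argues, this single number does not reliably predict GNN accuracy.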
The proliferation of radical online communities and their violent offshoots has sparked great societal concern. However, the current practice of banning such communities from mainstream platforms has unintended consequences: (i) the further radicalization of their members on the fringe platforms to which they migrate; and (ii) the spillover of harmful content from fringe platforms back onto mainstream ones. Here, in a large observational study on two banned subreddits, r/The_Donald and r/fatpeoplehate, we examine how factors associated with the RECRO radicalization framework relate to users' migration decisions. Specifically, we quantify how these factors affect users' decisions to post on fringe platforms and, for those who do, whether they continue posting on the mainstream platform. Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform, whereas social-level factors, users' connection with the radical community, only affect the propensity to be coactive on both platforms. Overall, our findings pave the way for evidence-based moderation policies, as the decisions to migrate and remain coactive amplify unintended consequences of community bans.
One of the major challenges in Deep Reinforcement Learning for control is the need for extensive training to learn the policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without any assumption about the knowledge of the system's dynamics. There is no expectation that it will be able to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN is able to achieve better or equivalent data efficiency with respect to the classic function approximation solutions.
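The tutor-guided exploration described above can be illustrated with a small, hypothetical action-selection routine; the probabilities `epsilon` and `tutor_prob` are illustrative assumptions, not the paper's actual schedule:

```python
import random

def select_action(q_values, tutor_action, epsilon=0.1, tutor_prob=0.2, rng=random):
    """Epsilon-greedy selection with occasional control-tutor suggestions.

    With probability tutor_prob the exogenous control law's suggestion is
    followed; otherwise standard epsilon-greedy over the Q-network output.
    """
    if rng.random() < tutor_prob:
        return tutor_action                      # follow the tutor's suggestion
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))      # random exploration
    return max(range(len(q_values)), key=q_values.__getitem__)  # greedy action
```

Because the tutor only biases exploration, the Q-network is still trained on whatever action was actually taken, so a poor tutor degrades sample efficiency but need not prevent convergence.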
We investigate the sample complexity of learning the optimal arm for multi-task bandit problems. Arms consist of two components: one that is shared across tasks (that we call representation) and one that is task-specific (that we call predictor). The objective is to learn the optimal (representation, predictor)-pair for each task, under the assumption that the optimal representation is common to all tasks. Within this framework, efficient learning algorithms should transfer knowledge across tasks. We consider the best-arm identification problem for a fixed confidence, where, in each round, the learner actively selects both a task, and an arm, and observes the corresponding reward. We derive instance-specific sample complexity lower bounds satisfied by any $(\delta_G,\delta_H)$-PAC algorithm (such an algorithm identifies the best representation with probability at least $1-\delta_G$, and the best predictor for a task with probability at least $1-\delta_H$). We devise an algorithm OSRL-SC whose sample complexity approaches the lower bound, and scales at most as $H(G\log(1/\delta_G)+ X\log(1/\delta_H))$, with $X,G,H$ being, respectively, the number of tasks, representations and predictors. By comparison, this scaling is significantly better than the classical best-arm identification algorithm that scales as $HGX\log(1/\delta)$.
In recent years, there has been a growing interest in the effects of data poisoning attacks on data-driven control methods. Poisoning attacks are well known to the Machine Learning community; however, the analyses developed there rely on assumptions, such as cross-sample independence, that in general do not hold for linear dynamical systems. Consequently, these systems require different attack and detection methods than those developed for supervised learning problems in the i.i.d.\ setting. Since most data-driven control algorithms make use of the least-squares estimator, we study how poisoning impacts the least-squares estimate through the lens of statistical testing, and ask how data poisoning attacks can be detected. We establish under which conditions the set of models compatible with the data includes the true model of the system, and we analyze different poisoning strategies for the attacker. On the basis of these arguments, we propose a stealthy data poisoning attack on the least-squares estimator that can escape classical statistical tests, and conclude by showing the effectiveness of the proposed attack.
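As context for the attack surface, a minimal sketch of the least-squares estimator under study, i.e. the standard estimator for a linear system x_{t+1} = A x_t + B u_t + w_t (this is not the paper's poisoning or detection procedure):

```python
import numpy as np

def least_squares_sysid(X, U):
    """Estimate (A, B) of x_{t+1} = A x_t + B u_t + w_t by ordinary least squares.

    X: (T+1, n) state trajectory; U: (T, m) input sequence.
    """
    Z = np.hstack([X[:-1], U])                    # regressors [x_t, u_t]
    Y = X[1:]                                     # targets x_{t+1}
    theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    AB = theta.T                                  # (n, n+m) stacked [A | B]
    n = X.shape[1]
    return AB[:, :n], AB[:, n:]
```

Because every regressor row x_t is itself a function of past noise and inputs, the rows of Z are not independent samples, which is exactly why i.i.d.-style poisoning analyses do not transfer to this setting.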
Online platforms face pressure to keep their communities civil and respectful. Thus, the banning of problematic online communities from mainstream platforms like Reddit and Facebook is often met with enthusiastic public reactions. However, this policy can drive users to migrate to alternative fringe platforms with lower moderation standards, where antisocial behaviors like trolling and harassment are widely accepted. As users of these communities often remain co-active across mainstream and fringe platforms, antisocial behaviors may spill over onto the mainstream platform. We study this possible spillover by analyzing around 70,000 users from three banned communities that migrated to fringe platforms: r/The_Donald, r/GenderCritical, and r/Incels. Using a difference-in-differences design, we contrast co-active users with matched counterparts to estimate the causal effect of fringe-platform participation on users' antisocial behavior on Reddit. Our results show that participating in the fringe communities increases users' toxicity on Reddit (as measured by the Perspective API) and their involvement with subreddits similar to the banned community, which often also breach platform norms. The effect intensifies with time and with exposure to the fringe platform. In short, we find evidence of a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
We study the compositionality of the soft symbols learned by relational neural architectures over object-centric (slot-based) representations, under a variety of sparsity-inducing constraints. We find that increased sparsity, especially on features, improves the performance of some models and leads to simpler relations. Additionally, we observe that object-centric representations can be detrimental when not all objects are fully captured, a failure mode to which CNNs are less prone. These findings demonstrate a trade-off between interpretability and performance, even for models designed to solve relational tasks.
We present an equation/variable-free machine learning (EVFML) framework for the control of the collective dynamics of complex/multiscale systems modelled via microscopic/agent-based simulators. The approach obviates the need for constructing surrogate, reduced-order models. The proposed implementation consists of three steps: (A) from high-dimensional agent-based simulations, machine learning (in particular, non-linear manifold learning via Diffusion Maps (DMs)) helps to identify a set of coarse-grained variables that parametrize the low-dimensional manifold on which the emergent/collective dynamics evolve; the non-linear mappings from the high-dimensional input space to the low-dimensional manifold and back are constructed by coupling DMs with the Nystrom extension and Geometric Harmonics; (B) having identified the manifold and its coordinates, we exploit the Equation-free approach to perform numerical bifurcation analysis of the emergent dynamics; then, (C) based on the previous steps, we design data-driven embedded wash-out controllers that drive the agent-based simulators to their intrinsic, imprecisely known, emergent open-loop unstable steady states, thus demonstrating that the scheme is robust against numerical approximation errors and modelling uncertainty. We illustrate the framework on (i) a traffic-dynamics model and (ii) a stochastic financial-market agent model exhibiting mimesis.
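Step (A) relies on Diffusion Maps; a bare-bones sketch of the basic construction (dense Gaussian kernel, no Nystrom extension or Geometric Harmonics, and `epsilon` is an illustrative bandwidth rather than a tuned value):

```python
import numpy as np

def diffusion_maps(X, epsilon, n_coords=2):
    """Minimal Diffusion Maps: coarse coordinates from a Gaussian kernel.

    X: (N, d) high-dimensional samples; epsilon: kernel bandwidth.
    Returns the leading non-trivial diffusion coordinates, shape (N, n_coords).
    """
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / epsilon)                             # Gaussian kernel matrix
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                        # sort by decreasing eigenvalue
    vals = vals.real[order]
    vecs = vecs.real[:, order]
    # skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1]
```

Production implementations typically symmetrize the normalization and use sparse nearest-neighbor kernels; this dense version only illustrates how a few diffusion coordinates can parametrize the low-dimensional manifold.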